Recent work has reported that AI classifiers trained on audio recordings can accurately predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. Here, we undertake a large-scale study of audio-based deep learning classifiers as part of the UK government's pandemic response. We collect and analyse a dataset of audio recordings from 67,842 individuals with linked metadata, including reverse transcription polymerase chain reaction (RT-PCR) test outcomes, of whom 23,514 tested positive for SARS-CoV-2. Subjects were recruited via the UK government's National Health Service Test-and-Trace programme and the REal-time Assessment of Community Transmission (REACT) randomised surveillance survey. In an unadjusted analysis of our dataset, AI classifiers predict SARS-CoV-2 infection status with high accuracy (receiver operating characteristic area under the curve (ROC-AUC) 0.846 [0.838, 0.854]), consistent with the findings of previous studies. However, after matching on measured confounders, such as age, gender, and self-reported symptoms, our classifiers' performance is much weaker (ROC-AUC 0.619 [0.594, 0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by simple predictive scores based on user-reported symptoms.
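The contrast between the unadjusted and matched results is the crux of this abstract. As a loose, self-contained illustration of the effect (not the study's actual pipeline; the synthetic data, column names, and exact 1:1 matching routine below are all assumptions), a classifier that merely tracks a confounder can score a high unadjusted ROC-AUC that collapses once positives and negatives are matched on that confounder:

```python
# Hedged sketch: unadjusted vs. confounder-matched ROC-AUC.
# All columns and the matching routine are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age_band": rng.integers(0, 8, n),
    "gender": rng.integers(0, 2, n),
    "symptomatic": rng.integers(0, 2, n),
})
# Symptoms drive both the PCR result and the classifier score,
# acting as a measured confounder.
df["pcr_positive"] = (rng.random(n) < 0.2 + 0.4 * df["symptomatic"]).astype(int)
df["classifier_score"] = df["symptomatic"] + 0.5 * rng.normal(size=n)

def matched_subset(df, confounders):
    """Exact 1:1 matching: pair each positive with an unused negative
    that shares the same values of the measured confounders."""
    pos = df[df["pcr_positive"] == 1]
    neg = df[df["pcr_positive"] == 0]
    pairs, used = [], set()
    for _, row in pos.iterrows():
        ok = (neg[confounders] == row[confounders]).all(axis=1)
        cand = neg[ok & ~neg.index.isin(used)]
        if not cand.empty:
            used.add(cand.index[0])
            pairs += [row, cand.iloc[0]]
    return pd.DataFrame(pairs)

auc_raw = roc_auc_score(df["pcr_positive"], df["classifier_score"])
m = matched_subset(df, ["age_band", "gender", "symptomatic"])
auc_matched = roc_auc_score(m["pcr_positive"], m["classifier_score"])
print(f"unadjusted ROC-AUC: {auc_raw:.3f}, matched ROC-AUC: {auc_matched:.3f}")
```

On this toy data the unadjusted ROC-AUC is well above 0.5 while the matched ROC-AUC falls to near chance, mirroring the drop the abstract reports.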
Since early in the coronavirus disease 2019 (COVID-19) pandemic, there has been interest in using artificial intelligence methods to predict COVID-19 infection status from vocal audio signals, for example cough recordings. However, existing studies have limitations in how their data were collected and in how the performance of the proposed predictive models was assessed. This paper rigorously assesses state-of-the-art machine learning techniques used to predict COVID-19 infection status from vocal audio signals, using a dataset collected by the UK Health Security Agency. This dataset includes acoustic recordings and extensive study participant metadata. We provide guidelines for testing the performance of methods that classify COVID-19 infection status from acoustic features, and we discuss how these guidelines can be extended more generally to the development and assessment of predictive methods based on public health datasets.
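Although the abstract does not spell out the guidelines, one standard safeguard in this kind of assessment, and a plausible ingredient of any such protocol, is ensuring that recordings from the same participant never straddle the train/test boundary. A minimal sketch, with synthetic arrays standing in for acoustic features:

```python
# Hedged sketch: a participant-disjoint train/test split, a standard
# safeguard when several recordings can come from the same person.
# The arrays and grouping key are illustrative assumptions.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                    # acoustic feature vectors
y = rng.integers(0, 2, size=1000)                  # linked PCR outcome (0/1)
participant_id = rng.integers(0, 300, size=1000)   # several clips per person

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=participant_id))

# No participant appears on both sides of the split.
assert not set(participant_id[train_idx]) & set(participant_id[test_idx])
```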
The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked for 70,794 of 72,999 participants, including 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma and 27.20% having linked influenza PCR test results.
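As a purely hypothetical sketch of how such a release (audio files plus a linked metadata table) might be consumed: the file name and column names below are invented for illustration, since the abstract does not specify the actual schema.

```python
# Illustrative only: the dataset's real file formats and column names are
# not given in the abstract, so everything here is an assumed schema.
import pandas as pd

meta = pd.read_csv("participant_metadata.csv")     # hypothetical filename

# Keep participants with a linked PCR result, as in the reported counts.
linked = meta[meta["pcr_result"].isin(["positive", "negative"])]

# Fraction reporting respiratory symptoms (abstract reports 45.62%).
print(linked["any_respiratory_symptom"].mean())

# Per-modality audio paths (cough / exhalation / speech), if laid out this way.
for _, row in linked.head(3).iterrows():
    print(row["participant_id"], row["cough_audio_path"], row["pcr_result"])
```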
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
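A brief, hedged example of the workflow the abstract describes, combining MONAI's purpose-specific architectures, losses, and transforms on top of PyTorch; the shapes and hyperparameters here are illustrative choices, not recommendations:

```python
# Minimal MONAI sketch: a 3D U-Net, a Dice loss, and an intensity
# transform on a random volume. Shapes/hyperparameters are illustrative.
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet
from monai.transforms import Compose, ScaleIntensity

# Purpose-built network for volumetric (CT/MRI-style) data.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
preprocess = Compose([ScaleIntensity()])       # rescale intensities to [0, 1]

volume = preprocess(torch.rand(1, 64, 64, 64)).unsqueeze(0)  # B, C, D, H, W
label = torch.randint(0, 2, (1, 1, 64, 64, 64))
loss = loss_fn(model(volume), label)
loss.backward()
```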
Practical image segmentation tasks often involve images that must be reconstructed from noisy, distorted, and/or incomplete observations. A recent approach to solving such tasks is to perform the reconstruction and segmentation jointly, using each to guide the other. However, this line of work has so far employed relatively simple segmentation methods, such as the Chan-Vese algorithm. In this paper, we present a method for joint reconstruction-segmentation using graph-based segmentation methods, which have recently seen an increase in interest. Complications arise due to the large size of the matrices involved, and we show how these complications can be managed. We then analyse the convergence properties of our scheme. Finally, we apply this scheme to distorted versions of the 'two cows' image familiar from the prior graph-based segmentation literature, first to a highly noised version and then to a blurred version, achieving highly accurate segmentations in both cases. We compare these results with those obtained by sequential reconstruction-segmentation approaches, finding that our method competes with, or even outperforms, those approaches in terms of both reconstruction and segmentation accuracy.
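For readers unfamiliar with the graph-based ingredient, the sketch below illustrates its simplest spectral variant: build a similarity graph over pixels, form the graph Laplacian, and threshold its Fiedler vector for a two-phase segmentation. This is a minimal stand-in, not the paper's joint reconstruction-segmentation scheme, and the weighting parameters are assumptions:

```python
# Hedged sketch of graph-based two-phase segmentation via the Fiedler
# vector of the graph Laplacian. Parameters are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def fiedler_segmentation(img, sigma=0.5):
    h, w = img.shape
    n = h * w
    rows, cols, vals = [], [], []
    for i in range(h):
        for j in range(w):
            p = i * w + j
            for di, dj in ((0, 1), (1, 0)):        # 4-neighbour grid graph
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    q = ni * w + nj
                    wt = np.exp(-((img[i, j] - img[ni, nj]) ** 2) / sigma**2)
                    rows += [p, q]; cols += [q, p]; vals += [wt, wt]
    W = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    L = laplacian(W)                                # L = D - W
    # Two smallest eigenpairs via shift-invert; the second eigenvector
    # (the Fiedler vector) encodes the two-way cut.
    _, vecs = eigsh(L, k=2, sigma=-1e-6, which="LM")
    return (vecs[:, 1] > 0).reshape(h, w)

# Example: a noisy two-region image, loosely in the spirit of the
# 'two cows' experiments.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
img += 0.1 * np.random.default_rng(0).normal(size=img.shape)
mask = fiedler_segmentation(img)
```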
Human medical data can be challenging to obtain due to data privacy concerns, the difficulty of conducting certain types of experiments, or prohibitive associated costs. In many settings, data from animal models or in vitro cell lines are available to help augment our understanding of the human data. However, such data are known to have low aetiological validity compared with human data. In this work, we augment a small human medical dataset with in vitro data and data from animal models. We use Invariant Risk Minimization (IRM) to elucidate invariant features by treating cross-organism data as belonging to different data-generating environments. Our models identify genes of relevance to human cancer development. We observe a degree of consistency across the varying amounts of human and mouse data used; however, further work is required to obtain conclusive insights. As a secondary contribution, we enhance existing open-source datasets and provide two uniformly processed, cross-organism, homologous-gene-matched datasets.
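A minimal sketch of the IRM idea this abstract relies on, using the IRMv1 surrogate penalty of Arjovsky et al.: each organism or assay is treated as a separate data-generating environment, and the penalty discourages features whose optimal classifier differs across environments. The toy data, model, and penalty weight below are assumptions:

```python
# Hedged sketch of the IRMv1 penalty across data-generating environments
# (e.g. human vs. mouse data). Toy data, model and weight are assumptions.
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """Squared gradient of the environment risk w.r.t. a fixed dummy
    scale w = 1.0 (the IRMv1 surrogate for the invariance constraint)."""
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

model = torch.nn.Linear(50, 1)             # e.g. 50 gene-expression features
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 10.0                                 # penalty weight (assumption)

# One (features, labels) pair per environment, e.g. one per organism.
environments = [
    (torch.randn(128, 50), torch.randint(0, 2, (128, 1)).float()),
    (torch.randn(128, 50), torch.randint(0, 2, (128, 1)).float()),
]

optimiser.zero_grad()
for X, y in environments:
    logits = model(X)
    risk = F.binary_cross_entropy_with_logits(logits, y)
    (risk + lam * irm_penalty(logits, y)).backward()
optimiser.step()
```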